In the software world, we like to draw an analogy between software architecture and the architecture of buildings. It’s easy, for example, to see parallels with ‘waterfall’ development processes, in which requirements are invariably gathered and an architectural plan established before the software is built.
On the whole, the analogy isn’t too difficult to understand. A clear objective of both types of architectural design is to allow for subsequent change, and the less predictable the change, the harder it is to accommodate. Although it’s a reasonable task to build an extension on the side of a house, few people would expect converting a bathroom into a ballroom to be quick or easy without altering the architecture.
So it goes with software.
Once the software architecture is in place, it may subsequently be discovered that the requirements have changed or were never fully understood. How easy the software is to change depends on whether it was architected in such a way that alterations don’t significantly conflict with the original design.
The more extreme agile methodologies deviate further from this blueprint. Applications are written in smaller slices, delivering value to the user faster but reducing visibility of the overall design. There often isn’t any one individual responsible for designing the architecture, and the decision-making is delegated to the developers incrementally working on the software.
Because cycle times are reduced, the team can get feedback faster and respond more quickly to changing requirements, but how easy it is to implement changes still depends on whether they are congruous with the architecture. Without an up-front architecture, the application may be more malleable – or, without sufficient vision, an amorphous blob.
In either case, if you end up with a mismatch between new or changed requirements and architecture, or no architecture at all, a product can easily become difficult to work on.
The attack of the big ball of mud
A big ball of mud is a software system without a perceivable architecture, and it is one of the two forms of legacy code I’ve encountered. Perhaps the developer who first started working on it had some clear architecture in mind, guided by the requirements of the first version. Who knows? Over the years it has seen code committed by dozens of developers, each of whom just had to make the change they wanted.
If the software isn’t currently wired up in a way that makes it easy to make a particular change, substantial rework would take far too long. Best to just weaken the abstractions and connect whatever parts need to be connected to make it work. Sure, the ballroom still has the bathroom tiles and the hallway is full of redirected pipes, but it’s possible to have a dance in there, and the plumbing still works. Good enough.
There are very practical reasons for doing this. It often really isn’t reasonable to substantially re-architect an existing product, and user value is still delivered. But as layer upon layer of modifications are made, it becomes increasingly difficult to understand how anything is connected. New team members become anxious about making changes because it’s no longer clear which walls are load-bearing and it’s too easy to puncture one of the redirected pipes when trying to install a chandelier.
At this point, there are no easy solutions. You can redesign and reconstruct the building, but it takes time and money. If you do, you’ll hopefully end up with an architecture that’s sensible for today, but there’s nothing to say the same process won’t start all over again as more unanticipated requirements come up.
The most pragmatic thing to do is make gradual improvements where possible, targeting the areas which make it most difficult to work on. It may never result in the perfect architecture, but with persistence it will ease the pain.
When well-meaning architecture goes wrong
Alternatively, there may be a very clear architecture which binds everything together, its enduring existence a testament to past architects and the restraint and respect of developers who have worked on it since its implementation.
It’s just a pity the architecture is completely wrong when viewed with the current requirements in mind. Walls made entirely of glass make for an amazing car showroom, but they’re terrible for a bathroom, and it’s difficult to see how this conversion can be done without replacing everything.
That the codebase isn’t a ball of mud shows some thought was put into architecture – quite possibly a lot of thought. But the design the original architects were working to was too tightly bound to the original requirements. Hindsight being what it is, it’s easy to criticize the decisions they made at the time – what were they thinking, using something so easily breakable? And what about privacy?
But we can never anticipate everything, and attempting to do so can result in a closely related problem: an architecture which is too generalized. It’s very clever the house is constructed using a custom framework of rails to which wall segments are attached, but this is wasted effort if the floor plan never changes, and anyone new to the team has to spend time learning about this specific way of working. And it’s impossible to anticipate every change – the framework will still have to be modified if a square room is to be made into a round one.
Is there a halfway house?
Clearly, an architecture that is bound either too loosely or too tightly to the original requirements is a problem. The former increases the overhead of working on the project without necessarily realizing any gain, especially if the original architects failed to envisage the particular change you have to implement. The latter makes unanticipated changes unreasonably difficult to implement.
Once again, there are no easy answers if there is a mismatch between the architecture and the change required. Attempting to work around it despite its unsuitability may be a pragmatic choice at first, but is a step on the road towards creating a tangled blob.
Spending some time to massage the architecture in a direction that allows the work to be done may be the only sensible option at this point, but this period may be slow and painful, and concerted effort might be needed to transform it entirely into a better state.
This seems to leave us with an unsolvable conundrum. We can’t anticipate every change that might some day be required, but if we ever architect anything in a way that makes it difficult to implement changes later, some degree of pain appears inevitable.
There are, however, some principles we can apply which can help mitigate this:
- Components should be small and loosely-coupled. Component-based programming is by far the most significant theme in architectural discussions. Small components can be more easily understood, and their internal architectures can be simple. If they need to be substantially modified or rewritten, this can be done quickly and without any effect on the rest of the application (provided tests exist against the API that will be used). Interfaces should also be small and focused; a sketch of such a component appears after this list. This may extend to microservices, but the idea is not new: it is not unlike the way command-line tools are chained together in UNIX.
- Developers should spend time thinking about the big picture. It’s easy for developers to focus on fixing immediate code smells, but these are relatively unimportant. It’s more important to think about interfaces and the tests that check their behavior, because it’s these interfaces that will be difficult to change in the future.
- It should be clear where everything is. If the structure of the project doesn’t make it clear where particular functionality is or would be, it is easier for the architecture to be eroded and for duplication to arise. A clear structure also helps new team members to start work on the project and orientate themselves.
- Design for now, but be open to change. Don’t write functionality you don’t need yet, and don’t seek to make the design generic enough to do anything without good reason. It may never turn out to be necessary, or you may find you want to implement things differently when it is (think YAGNI, the acronym for “You aren’t gonna need it”, the extreme programming principle that functionality should not be added until deemed necessary). But create seams where change seems likely, so it’s easier to make the change if the need arises; a sketch of such a seam also follows this list. Components should assume as little as possible about each other.
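As an illustration of a small, loosely-coupled component with a focused interface, here is a minimal C# sketch. All the names (Receipt, IReceiptStore, InMemoryReceiptStore) are hypothetical, invented for illustration rather than taken from any particular codebase:

```csharp
using System.Collections.Generic;

// The component's data is a simple value type.
public record Receipt(string Id, decimal Total);

// The interface is deliberately small and focused: callers depend only
// on the two operations they need, not on how receipts are stored.
public interface IReceiptStore
{
    void Save(Receipt receipt);
    Receipt? Find(string id);
}

// A simple in-memory implementation. Because the rest of the
// application sees only IReceiptStore, this class could be rewritten
// against a database without affecting anything else (provided tests
// exist against the interface, as noted above).
public class InMemoryReceiptStore : IReceiptStore
{
    private readonly Dictionary<string, Receipt> _receipts = new();

    public void Save(Receipt receipt) => _receipts[receipt.Id] = receipt;

    public Receipt? Find(string id) =>
        _receipts.TryGetValue(id, out var receipt) ? receipt : null;
}
```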
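And a sketch of a deliberate seam, continuing the same hypothetical example. No second store implementation is written, because none is needed yet (YAGNI), but the service takes its dependency through the interface, so substituting one later is a change at the point of construction rather than a rework of the service:

```csharp
// BillingService assumes as little as possible about the store: only
// what the IReceiptStore interface promises.
public class BillingService
{
    private readonly IReceiptStore _store;

    // The constructor is the seam: a database-backed store, or a fake
    // for testing, can be passed here without changing this class.
    public BillingService(IReceiptStore store) => _store = store;

    public void RecordSale(string id, decimal total) =>
        _store.Save(new Receipt(id, total));
}
```

Today the application simply constructs `new BillingService(new InMemoryReceiptStore())`; if requirements change, only that line needs to know about it.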
A matter of opinion
Just as we can’t assume all true Scotsmen wear kilts, we can’t say all good architects agree on every principle. While it is true a consistent approach is beneficial, the specifics of how to approach particular aspects of architecture often differ:
- The single responsibility principle. One of the SOLID principles, this states that each component should have responsibility for one aspect of functionality. It can be applied at different levels: should an assembly or service have a single responsibility? How is a responsibility defined? It can also be argued the principle should be looked at upside down: the first priority should be for each function to be implemented in a single component, since concerns such as security become confused and hard to understand or change if they are split across multiple services. A class-level sketch of the principle follows this list.
- The role of interfaces. Interfaces can be viewed as the internal documentation of the codebase and a means to provide options for change. Extensive use of interfaces describing all public members makes it easy to introduce new implementations and to use patterns such as null object and decorator (sketched after this list). Alternatively, interfaces can be considered more strictly as contracts of behavior, as opposed to classes, which define things. On this view, interfaces should not have properties, and concrete classes should be used in signatures unless there are multiple implementations of a particular type of object, rather than interfaces being used everywhere.
- What a component should look like. In .NET, a component could be as small as a class or namespace folder, be defined by an assembly and its public methods, or perhaps be an independent executable or service. Having many assemblies or services allows for stronger isolation, but also means more interop code and code paths that are harder to follow. Components can instead be separated within a single assembly, which can be easier to work with, but this requires greater discipline to avoid coupling components unnecessarily.
- What to unit test. High unit test coverage around interfaces is often held to be very desirable, because it not only provides confidence that the code is correct but also protects the system from unintended change. However, it is sometimes argued that such tests are unnecessary for components which have few dependencies and change infrequently, or that a change to a particular interface definition is unimportant in itself, in which case the tests should instead check that components can still communicate as expected. A sketch of a test written against an interface also follows this list.
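To illustrate the first of these points, here is a hypothetical sketch of the single responsibility principle applied at class level: pricing and persistence each live in their own component, so a change to one never forces a change to the other. The names and the CSV format are invented for illustration:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;

// Responsible only for the pricing rules.
public class InvoiceCalculator
{
    public decimal Total(IEnumerable<decimal> lineAmounts, decimal taxRate) =>
        lineAmounts.Sum() * (1 + taxRate);
}

// Responsible only for persistence; changes to pricing never touch it.
public class InvoiceRepository
{
    public void Save(string invoiceId, decimal total) =>
        File.AppendAllText("invoices.csv", $"{invoiceId},{total}\n");
}
```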
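The interface-heavy style described in the second point enables the two patterns mentioned there. A minimal sketch with invented names – a null object callers can use without null checks, and a decorator that adds behavior without modifying the implementation it wraps:

```csharp
using System;

public interface INotifier
{
    void Notify(string message);
}

// Null object: safely does nothing, so callers never test for null.
public class NullNotifier : INotifier
{
    public void Notify(string message) { }
}

// Decorator: wraps any INotifier and adds logging around it.
public class LoggingNotifier : INotifier
{
    private readonly INotifier _inner;

    public LoggingNotifier(INotifier inner) => _inner = inner;

    public void Notify(string message)
    {
        Console.WriteLine($"Sending: {message}");
        _inner.Notify(message);
    }
}
```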
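Finally, a sketch of a unit test written against an interface rather than a concrete class, reusing the hypothetical IReceiptStore from earlier and assuming the xUnit test framework. The test documents the behavior callers rely on; if the store is reimplemented, the same test still protects it:

```csharp
using Xunit;

public class ReceiptStoreTests
{
    // The test depends on the interface; only this one line names the
    // concrete implementation under test.
    private readonly IReceiptStore _store = new InMemoryReceiptStore();

    [Fact]
    public void SavedReceiptCanBeFoundById()
    {
        var receipt = new Receipt("r-1", 42.00m);

        _store.Save(receipt);

        // Receipt is a record, so Assert.Equal compares by value.
        Assert.Equal(receipt, _store.Find("r-1"));
    }
}
```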
Conclusions
If you don’t take any time to think about architecture, or if you try to force a tightly-coupled architecture to bend in ways it wasn’t designed to, it’s easy to become trapped. The way to minimize this pain is to spend a little time up-front thinking about how to implement a feature, ensuring functions are split between small components and considering how they should interact with each other.
This doesn’t mean the architecture will never need to change, but it reduces the impact of change on the codebase, and allows each component to be understandable and testable. When making changes is easier, there is less temptation to break abstractions, and the big ball of mud can be minimized – and, hopefully, eliminated.